Document inplace_on_inputs method with working example and remove TODO comment #1575


Draft · wants to merge 3 commits into main

Conversation


@Copilot Copilot AI commented Aug 9, 2025

This PR addresses the documentation of the inplace_on_inputs method as requested in the issue. The method allows Op classes to create inplace versions of themselves for specified inputs, enabling memory optimizations while maintaining correctness.

Changes Made

  1. Removed TODO comment from pytensor/graph/op.py in the inplace_on_inputs method definition
  2. Added comprehensive documentation to doc/extending/inplace.rst explaining the inplace_on_inputs method
  3. Included a complete working example showing how to implement and use the method

Documentation Added

The new documentation section includes:

  • Purpose: How the method creates inplace versions of ops for memory optimization
  • Current usage: Primarily used with Blockwise operations through PyTensor's rewriting system
  • Future extension: Will be extended to support core ops directly
  • Implementation guidelines: How to properly implement the method in custom ops
  • Complete working example: A self-contained example with MyOpWithInplace class showing:
    • Proper inplace_on_inputs implementation
    • Blockwise usage demonstrating automatic inplace optimization
    • Expected output showing the inplace version is used
    • Test demonstrating actual inplace array modification behavior

The example demonstrates how PyTensor automatically calls inplace_on_inputs when optimizing Blockwise operations, and shows the resulting memory-efficient inplace behavior when inputs are marked as mutable.

This method is automatically used by PyTensor's optimization pipeline and is crucial for efficient memory management in Blockwise operations. The documentation makes it clear that users typically don't need to call this method directly, but Op developers should understand how to implement it properly.
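To make the implementation guidelines concrete, here is a minimal, self-contained sketch of the pattern (illustrative only, not the example committed to doc/extending/inplace.rst; the hypothetical AddOneToFirst Op and the allowed_inplace_inputs parameter name reflect one reading of the upstream signature and should be treated as assumptions):

from pytensor.graph.basic import Apply
from pytensor.graph.op import Op


class AddOneToFirst(Op):
    # Hypothetical Op used only for illustration: adds 1 to the first element.
    __props__ = ("inplace",)

    def __init__(self, inplace=False):
        self.inplace = inplace
        if inplace:
            # Output 0 reuses (destroys) the buffer of input 0.
            self.destroy_map = {0: [0]}

    def make_node(self, a):
        return Apply(self, [a], [a.type()])

    def perform(self, node, inputs, output_storage):
        [a] = inputs
        if not self.inplace:
            a = a.copy()
        a[0] += 1
        output_storage[0][0] = a

    def inplace_on_inputs(self, allowed_inplace_inputs):
        # Called by PyTensor's inplace rewrites with the indices of inputs
        # that are allowed to be destroyed. Return the most inplace version
        # of the Op that respects that set, or self if nothing can be done.
        if 0 in allowed_inplace_inputs:
            return type(self)(inplace=True)
        return self

Returning a reconfigured copy rather than mutating self keeps Op instances effectively immutable, which the rewriting machinery generally assumes.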



@ricardoV94 ricardoV94 changed the title [WIP] Mention this method in the Pytensor docs page on inplace (alias and destruction). It only works now with Blockwised Ops but will be extended to core ops as well. It should return a version of the Op that inplaces in as many of the requested inputs as p... Document Op.inplace_on_inputs Aug 9, 2025
@Copilot Copilot AI changed the title Document Op.inplace_on_inputs Document inplace_on_inputs method and remove TODO comment Aug 9, 2025
@Copilot Copilot AI requested a review from ricardoV94 August 9, 2025 03:59
Copilot finished work on behalf of ricardoV94 August 9, 2025 03:59
Review thread on doc/extending/inplace.rst:

and returns a new `Op` instance that will perform inplace operations only on
those inputs where it is safe and beneficial to do so.

.. testcode::
ricardoV94 (Member) commented:

@copilot

Instead of this code snippet, provide a minimal self-contained example with a simple Op wrapped in Blockwise, and show that if we set the input to be mutable, the inplace version will be introduced. Something like:

import numpy as np

import pytensor
import pytensor.tensor as pt
from pytensor.graph.basic import Apply
from pytensor.graph.op import Op
from pytensor.tensor.blockwise import Blockwise

class MyOpWithInplace(Op):
  __props__ = ("destroy_a",)

  def __init__(self, destroy_a):
    self.destroy_a = destroy_a
    if destroy_a:
      self.destroy_map = {0: [0]}

  def make_node(self, a):
    return Apply(self, [a], [a.type()])

  def perform(self, node, inputs, output_storage):
    [a] = inputs
    if not self.destroy_a:
      a = a.copy()
    a[0] += 1
    output_storage[0][0] = a

  # Fill this in
  def inplace_on_inputs(...):
    ...

a = pt.vector("a")
# Only Blockwise Ops trigger inplace automatically for now
# Since the Blockwise isn't needed in this case, it will be removed after the inplace optimization
op = Blockwise(MyOpWithInplace(destroy_a=False), signature="(a)->(a)")
out = op(a)

# Give PyTensor permission to inplace on user provided inputs
fn = pytensor.function([pytensor.In(a, mutable=True)], out)

# Confirm that we have the inplace version of the Op 
fn.dprint(print_destroy_map=True)

test_a = np.zeros(5)
fn(test_a)
print(test_a)  # Should be modified in place

Then create the expected test output as well; it should include both the dprint and test_a with a 1 in its first position after the first call. Break the code so the dprint check comes before the array check.
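A rough skeleton of that two-block split in Sphinx doctest markup (a sketch only, assuming sphinx.ext.doctest with both blocks in the same default group so fn and np stay in scope; the dprint line is matched loosely with ELLIPSIS because its exact text depends on PyTensor's debug-print format, while the array line follows from a[0] += 1 on a zero vector):

.. testcode::

   # The dprint check comes first: the Op in the graph should carry destroy_a=True
   fn.dprint(print_destroy_map=True)

.. testoutput::
   :options: +ELLIPSIS

   ...destroy_a=True...

.. testcode::

   # Then the array check: the mutable input is modified in place
   test_a = np.zeros(5)
   fn(test_a)
   print(test_a)

.. testoutput::

   [1. 0. 0. 0. 0.]

Splitting the checks this way keeps the dprint verification independent of the numerical result, as requested.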

Copilot AI (Author) commented:

I've replaced the simple code snippet with a complete self-contained example as requested. The new example includes:

  • A full MyOpWithInplace class implementing inplace_on_inputs properly
  • Demonstration using Blockwise to trigger automatic inplace optimization
  • Expected dprint output showing the inplace version (destroy_a=True) is used
  • Test showing actual inplace array modification behavior

The code is broken into two testcode blocks as requested - first showing the dprint output, then the array modification test. Both include corresponding testoutput blocks with expected results.

Commit: a0f8eff

@Copilot Copilot AI changed the title Document inplace_on_inputs method and remove TODO comment Document inplace_on_inputs method with working example and remove TODO comment Aug 12, 2025
@Copilot Copilot AI requested a review from ricardoV94 August 12, 2025 15:55
Copilot finished work on behalf of ricardoV94 August 12, 2025 15:55